Adversarially Robust Few-Shot Learning: A Meta-Learning Approach
Previous work on adversarially robust neural networks for image classification requires large training sets and computationally expensive training procedures. Few-shot learning methods, on the other hand, are highly vulnerable to adversarial examples. The goal of our work is to produce networks that both perform well on few-shot classification tasks and are robust to adversarial examples. We develop an algorithm, called Adversarial Querying (AQ), for producing adversarially robust meta-learners, and we thoroughly investigate the causes of adversarial vulnerability. Our method achieves far better robust performance on few-shot image classification benchmarks, such as Mini-ImageNet and CIFAR-FS, than robust transfer learning.
Review for NeurIPS paper: Adversarially Robust Few-Shot Learning: A Meta-Learning Approach
Weaknesses: [Edit after Author Response] I thank the authors for acknowledging the suggestions to merge the tables, add captions, and move Algorithm 1 to the Appendix. The author response does not elaborate much on this point, but rather states the observations from Table 8. 2) While I thank the authors for running additional experiments on state-of-the-art meta-learning approaches such as MCT, and for noting that AQ on MCT reduces the drop in natural accuracy, the current results in the paper with other meta-learning approaches still show a large drop in natural accuracy. This certainly diminishes the practical use of AQ.
-------
I agree that the paper does have some genuine strengths. However, I am currently slightly inclined towards rejection, primarily for the following reasons: 1) The core idea of this paper is very simple and straightforward. Although the authors argue that they are the first to do it, I am unsure whether this work counts as a novel enough contribution for the NeurIPS community. 2) From the results in Tables 4 and 5 vs. Table 2, it appears that using AQ causes a large drop (sometimes almost 15-20%) in natural accuracy.
Review for NeurIPS paper: Adversarially Robust Few-Shot Learning: A Meta-Learning Approach
The submission proposes a method called adversarial querying (AQ) to tackle the problem of adversarial robustness in few-shot learning. Adversarial querying works by applying an adversarial perturbation to the query set when meta-training in an effort to find a few-shot learner parameterization which is robust to adversarial attacks when tuned on the support set of a given learning problem. Results in the paper show that naturally trained few-shot learners are very sensitive to adversarial attacks. Adversarial robustness results are presented for a variety of benchmarks (mini-ImageNet, CIFAR-FS, Omniglot) and learners (Prototypical Networks, R2-D2, MetaOptNet, MAML). The proposed approach is shown to yield better adversarial robustness than competing approaches (transfer learning from an adversarially-trained backbone, ADML) while maintaining a better clean accuracy.
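The mechanism the review summarizes — adapt on the clean support set, then evaluate (and meta-train) on adversarially perturbed queries — can be sketched on a toy task. The snippet below is a minimal, hypothetical illustration: it uses a linear model, a least-squares fit as a stand-in for the inner adaptation loop, and a single-step FGSM-style attack on the query input. The function names and the toy regression task are illustrative assumptions, not the paper's actual implementation (which attacks image classifiers with iterative PGD).

```python
import numpy as np

def loss_and_grad_x(w, x, y):
    """Squared loss of a linear model and its gradient w.r.t. the input x."""
    pred = x @ w
    loss = 0.5 * (pred - y) ** 2
    grad_x = (pred - y) * w  # d(loss)/dx for loss = 0.5 * (x@w - y)^2
    return loss, grad_x

def fgsm_query(w, x, y, eps):
    """Perturb a query input within an L-infinity ball to increase the loss."""
    _, grad_x = loss_and_grad_x(w, x, y)
    return x + eps * np.sign(grad_x)

# Toy task: the support set is fit cleanly; only the query is attacked.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
X_support = rng.normal(size=(20, 2))
y_support = X_support @ w_true + 0.1 * rng.normal(size=20)

# "Inner loop": least-squares fit on the clean support set (a stand-in
# for the task-specific adaptation step of a meta-learner).
w_task, *_ = np.linalg.lstsq(X_support, y_support, rcond=None)

# Query evaluation: compare the clean loss with the adversarial-query loss.
x_query = rng.normal(size=2)
y_query = x_query @ w_true
clean_loss, _ = loss_and_grad_x(w_task, x_query, y_query)
x_adv = fgsm_query(w_task, x_query, y_query, eps=0.1)
adv_loss, _ = loss_and_grad_x(w_task, x_adv, y_query)

# In AQ, the meta-update would backpropagate through adv_loss rather
# than clean_loss, seeking initializations whose adapted models stay
# robust on perturbed queries.
print(adv_loss >= clean_loss)
```

Note the asymmetry that defines the method: the support set stays clean, so adaptation is cheap and unchanged, while robustness pressure is applied only where the meta-objective is measured, on the query set.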